perm filename IMPURE[BOO,JMC]2 blob sn#525092 filedate 1980-07-23 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00010 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00002 00002	.IF FALSE THEN BEGIN "NOTES"
C00003 00003	.IF FALSE THEN BEGIN "NOTES-for-theory-chap"
C00006 00004	.s(IMPURE,LISP PROGRAMS WITH SIDE EFFECTS)
C00014 00005	.ss(prog,"Sequential (ALGOL-like) programs.")
C00033 00006	.ss(array, Arrays in LISP)
C00045 00007	.ss(macro, Defining macros in LISP)
C00056 00008	.ss(rplac,Pseudo-functions that modify list structures.)
C00084 00009	.ss(reent,Re-entrant List Structure.)
C00096 00010	.ss(impureex,Exercises.)
C00103 ENDMK
C⊗;
.IF FALSE THEN BEGIN "NOTES"

 "Sequential (ALGOL-like) programs."
    Example needed of iterative program which is clearly better than 
	recursive counterpart

should the space saving version of subst appear somewhere?

simple exercises needed for section ends
	array:picture exercise? array arith?
	macro:case construct pmacdef-def

idea for rplac example: make template, fill it in by rplacing


.END "NOTES"
.IF FALSE THEN BEGIN "NOTES-for-theory-chap"

Pulled from sequential programs section:
[
	We shall not treat the mathematics of sequential programs fully at
this point.  However, we note that under certain conditions
a sequential program that occurs as a term can be replaced by a term involving 
only functional programs.  This means that the theory developed 
in Chapter {SECTION PROVIN} can be applied to such programs.
The main property that a sequential program must have in order to be
replaced when it occurs as a term by a functional program computing its value 
is that the value of any expression containing the program as a term
remains unchanged when the program is replaced by its value.  Or more briefly,
the term must be without (external) side effects.  It is sufficient to
know that all assignments are to variables local to the program.
We will discuss transforming sequential programs into functional programs
and related topics in greater detail in Chapter {SECTION EXTEND}.

	#. For additional discussion of the method for converting sequential
programs into functional programs see McCarthy [1962b].
Algorithms for doing this in the case of LISP programs are discussed
in Chapter {SECTION EXTEND} as an example of programs that transform programs.

]

mention floyd-hoare like semantics and various programming logics: dl,pl/cv,etc.
conversion of seq to rec pgm
elephant repn of progs--prtn proof




.END "NOTES-for-theory-chap"
.s(IMPURE,LISP PROGRAMS WITH SIDE EFFECTS)
.if NOTASSEMBLING then start
.MACHIN: NEXT SECTION!;
.SEARCH: NEXT SECTION!;
.EXTEND: NEXT SECTION!;
.ABSNTX: NEXT SECTION!;
.COMPIL: NEXT SECTION!;
.COMPUT: NEXT SECTION!;
.end


	Pure clean LISP programs such as we described in Chapters
{SECTION READIN} and {SECTION WRITIN} 
simply compute some function of their arguments
and have no effect on the environment in which they are executed.
They admit the elegant mathematical
characterization described and applied in Chapter {SECTION PROVIN}.
Specifically, each recursively defined pure clean LISP program
can be translated into a functional equation and minimization
schema in a first order language, and the equation and schema
can be used to prove that the program meets its extensional
specifications or other mathematical properties of the function
computed by the program.

	However this is not the whole story.  LISP systems also provide 
for the definition of sequential programs in addition to the recursive
definition mechanism.  A sequential program operates by performing
a sequence of operations modifying the "state".    Thus
LISP systems also provide a variety of features for examining 
and modifying the state of the LISP world.  These include assigning
values to variables, more generally modifying the "property list" of
an atom, destructive (as opposed to ⊗⊗cons⊗tructive) operations
on list structure, and even examining and
modifying the LISP control mechanisms.  

A sequential program can be viewed as computing a function by designating 
some property of the final state of the computation (such as the value of 
some variable) as the output.   In general a LISP program will be used 
both to compute a function and to cause "side-effects".
By using programs with side-effects in a larger program one can
often compute answers more conveniently than in pure LISP.  
Side-effects can be used as a means of communication among a group
of programs acting jointly to carry out a computation.

	The mathematical properties of general side-effects are not well 
understood, although some techniques for treating certain classes of 
"well-structured" sequential programs have been developed.
Programs that take totally uninhibited advantage of side-effects
are usually difficult to understand, debug and modify.
This has led experienced programmers to use these features in moderation.  
That such programs are not accessible to present proof techniques
is of secondary interest to most applied programmers.
In the future we expect that language designers will reduce the need
for side-effect programming by providing more direct means for
achieving their results,
and programmers will come to see debugging a correctness proof
as a more worthwhile activity than debugging a program by examples, because it
terminates with a conclusive proof that the program meets its specifications.

	In this chapter we shall describe sequential programs in LISP and
some additional features for side-effect programming such as arrays,
property list operations, and destructive list structure operations.
Every computation that can be done with
these features can be done in pure clean LISP, but nevertheless there
are often good reasons for using these features.  We shall examine
the features themselves and also the criteria that determine when
they should be used in preference to pure clean LISP.
Ideas for mathematical treatment of sequential programs and side-effects
will be presented in a later chapter.  We also postpone discussion of
features for examining and modifying the LISP control structure until
Chapter {section MACHIN}.

.ss(prog,"Sequential (ALGOL-like) programs.")
.begin "seq" turnon "∂"
.n←20


	Sequential programs are impure (by definition),
but they can be clean, provided
certain restrictions are observed.  The external notation for
sequential programs is adapted from that of ALGOL 60.  

	As an example, we might write ⊗reverse as follows:

.begin nofill select 2
.n←20
.group

∂(n)	  reverse u ← 
∂(n+4)	    qprog[[v]
∂(n+8)		v ← qNIL;
!fcnreverse&prog   ∂(n+4)a:∂(n+8)qif qn u qthen qreturn v;
∂(n+8)		v ← qa u . v;
∂(n+8)		u ← qd u;
∂(n+8)	        qgo a].
.end

	The internal form of the same program is

.begin nofill select 6
.n←20
.group

∂(n)	(DEFUN REVERSE (U) 
∂(n+4)     (PROG (V)
∂(n+8)          (SETQ V NIL)
∂(n+8)	        A
∂(n+8)	        (COND ((NULL U) (RETURN V)))
∂(n+8)	        (SETQ V (CONS (CAR U) V))
∂(n+8)	        (SETQ U (CDR U))
∂(n+8)	        (GO A)))
.end

where the paragraphing is only for the reader's convenience.  

	Sequential programs are introduced in LISP by allowing
as a term an expression of the (external) form

∂(n)qprog[[<variable list>] <statement list>],

where <variable list> is a list of variables local to the
program, and <statement list> is a list of the statements of
the program.  In external notation, as in ALGOL 60, the statements 
are separated by semicolons, and any statement may be preceded by a 
label followed by a colon.

	The statements are of the following kinds:

	1. Assignment statements of the form

∂(n)<left hand side> ← <right hand side>,

where <left hand side> is a variable, possibly subscripted, and
<right hand side> is a LISP expression that can be evaluated.   

	2. qgo statements of the form

∂(n)qgo <label>

where <label> is  an expression that evaluates to a
label.   Since labels are atoms in internal notation, any expression
evaluating to an atom may be used, but the usual case is a conditional
expression wherein the second element of each pair is an atom.
Should the resulting value not be an atom, or should that atom
not be used as a label, an error message will be generated.
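For example (a sketch in internal notation, where $DONE and $MORE are assumed
to be labels of the surrounding $PROG), the statement

⊗⊗⊗$$(GO (COND ((NULL U) 'DONE) (T 'MORE)))$⊗

transfers control to $DONE when the value of $U is qNIL and to $MORE otherwise.
(In MACLISP the label atoms must be quoted here so that the value of the
conditional is the label itself.)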

	3. Conditional statements which have the form

∂(n)qif <p1> qthen <s1> qelse qif ... qelse qif <pn> qthen <sn>,

where the <pi> are propositional terms having truth values,
and the <si> are any statements.
	Notice that conditional statements terminate
with a qthen clause, as do conditional expressions in general in internal form.
If none of the propositional terms is true, the statement has no effect
(unless, of course, there were side effects in the evaluation of the propositional
terms).


	4. qreturn statements of the form

∂(n)qreturn <expression>

where <expression> is an arbitrary LISP term.  The effect of executing this
statement is to return from the program, giving the program as a term
the value of <expression>.  


	In general the internal form of a program term is the following:
.once center

($PROG  <variable list> <s1> ... <sn>)

and the four types of statements have the forms:
.begin nofill turnon "∂"
.n←20

∂(n)($SETQ <variable> <expression>)
∂(n)($GO <label>)
∂(n)($COND (<p1> <s1>) ... (<pn> <sn>))
∂(n)($RETURN <expression>)

.end


In addition, a $PROG statement may be an atom, which is interpreted as a label.
($GO <label>) transfers control to the list of statements beginning with
the statement following <label>.  Thus internal notation is again more 
uniform than external notation in treating labels as elements of the
statement list rather than as attachments to statements.

	In some LISP implementations arrays are treated specially and subscripted 
variables are not allowed as first arguments to $SETQ.  The $STORE command must be 
used  in order to change an array element.  We will say more about arrays in the 
next section.


	As further examples of how qprog might be used, here are some ways we 
might write ⊗append: 

!fcnappend&prog1 ⊗⊗⊗u * v ← qprog[[] qreturn qif qn u qthen v qelse qa u . [qd u * v]]⊗,

is just a trivial rewrite of the recursive definition.
.begin nofill select 2 turnon "∂"
.n←20
.group

∂(n)u * v ← 
∂(n+4)  qprog[[w]
!fcnappend&prog2 ∂(n+8)	    qif qn u qthen qreturn v;
∂(n+8)      w ← qa u . [qd u * v];
∂(n+8)      qreturn w]

.end
is almost as close to the pure LISP form.  If we want to replace the
recursion by a loop, we can write

.begin nofill select 2 turnon "∂"
.n←20
.group

∂(n)u * v ← 
∂(n+4)  qprog[[u1,v1,w]
∂(n+8)      w ← qNIL;
∂(n+8)      u1 ← u;
∂(n+8)      v1 ← v;
∂(n+4)a:∂(n+8)qif qn u1 qthen qgo b;
!fcnappend&prog3 ∂(n+8)      w ← qa u1 . w;
∂(n+8)      u1 ← qd u1;
∂(n+8)      qgo a;
∂(n+4)b:∂(n+8)qif qn w qthen qreturn v1;
∂(n+8)      v1 ← qa w . v1;
∂(n+8)      w ← qd w;
∂(n+8)      qgo b].

.end
This corresponds to the recursive program

.begin nofill select 2 turnon "∂"
.n←20
.group


∂(n)u * v ← app[u,v,qnil]
!fcnappend&pure3  
∂(n)app[u,v,w] ← 
∂(n+4)  qif qn u qthen [qif qn w qthen v qelse app[u,qa w . v,qd w]]
∂(n+4)  qelse app[qd u,v,qa u . w]

.end

which would have essentially the same "computation sequence" as the
sequential program if it is compiled or interpreted iteratively
(without using the stack).

	As a final example we have a sequential program for computing
the partitions of a list.  A recursive program was given and proved
correct in Chapter {section PROVIN}.  This example illustrates the
difference between "top-down" (recursive) and "bottom-up" (sequential)
thinking.   Recall that the criterion for ⊗W to be a partition of a 
list ⊗u into ⊗n parts is

⊗⊗⊗∀W u n.(Ispartn[W,u,n] ≡ ∃uul.(W=uul ∧ length uul=n ∧ combine uul=u)).⊗

To write the recursive program ⊗partn[u,n] we listed the partitions 
for the base cases where ⊗⊗u=qNIL⊗ or ⊗⊗n=0⊗.  Then we figured out
how each partition could be constructed from a partition of a sublist
of ⊗u (namely ⊗⊗qd u⊗) and put it all together in a recursive definition.
An alternate approach is to define a linear ordering on the set of 
partitions of a list ⊗u into ⊗n parts and then write a program to
enumerate them in order.  For example, we could begin with the smallest
and program a "successor" operation giving the next partition in the
ordering. Since there are only a finite number of partitions, the
program will eventually terminate with the largest.

	We order partitions as follows: the smallest is obtained by
making the first ⊗n-1 elements of the partition lists of length ⊗1 
and the ⊗⊗n⊗th element whatever is left after deleting the first
⊗n-1 members of ⊗u.  The largest partition is a list in which
the first element is the list containing the first ⊗m elements of
⊗u, where ⊗⊗m=length u-(n-1)⊗ and the remaining elements are one element
lists.  In general, the partition ⊗w is greater than the partition ⊗v if
at the first element where they disagree the element in ⊗w is longer than
that of ⊗v.   The program ⊗prtn given below enumerates partitions
in this order.  The auxiliary ⊗maxpart implements the definition of
biggest partition given above.  The loop $A completes an initial
fragment of a partition by adding lists of one element until the
⊗⊗n⊗th element is reached.  The remainder of ⊗u (stored in ⊗⊗v⊗)
is then added and the completed partition is added to the collector
⊗pl.  The loop $B is used to back up through a partition until the
first position where the length of the entry can be incremented is found.


.begin nofill select 2 turnon "∂"
.n←20
.group

∂(n)prtn[u, n] ← 
∂(n+4)  qprog [pl, p, v, m]
∂(n+8)    qif qn u ∧ n = 0 qthen return <qNIL>
∂(n+8)    qelse qif n = 0 qthen return qNIL
∂(n+8)    qelse qif length u < n qthen return qNIL
∂(n+8)    pl ← qNIL
∂(n+8)    p ← qNIL
∂(n+8)    v ← u
∂(n+8)    m ← n
∂(n+4)$$A:$∂(n+8) qif m = 1 qthen [pl ← reverse v . p . pl, qgo $$B$]
!fcnprtn&prog ∂(n+8)    p ← <qa v> . p
∂(n+8)    v ← qd v
∂(n+8)    m ← sub1 m
∂(n+8)    qgo $$A$
∂(n+4)$$B:$∂(n+8) qif maxpart[qa pl, u, n] qthen return pl
∂(n+8)    qelse qif m < length v qthen 
∂(n+8)    [p ← [qa p * <qa v>] . qd p, v ← qd v, qgo $$A$]
∂(n+8)    v ← [qa p * v]
∂(n+8)    p ← qd p
∂(n+8)    m ← add1 m
∂(n+8)    qgo $$B$

∂(n)maxpart[w, u, n] ← length u < length qa w + n

.end

	In a later chapter we will see how this program can be proved
to satisfy the same specification as the recursive version.  

	As promised, here are some circumstances in which sequential 
programs might be preferred to functional programs:
.item←0

	#. In some cases and for some people, it is easier to think
about how to go from the initial conditions step-by-step to the final
result than to think about how the final result can be obtained from
easier cases.  The two programs for computing partitions are examples
of these two styles of thinking and programming.
This seems to be a matter of taste to a substantial
extent, though the functional form seems to have clear conceptual
advantages for functions like ⊗append and ⊗subst.  

	#. When we are considering a program that interacts with
an environment instead of producing an answer, the sequential form
often seems more natural.

	#. When there are a large number of variables, it can turn
out that the necessary functions have an unwieldy number of arguments.
Frequently in such cases only a few of the variables actually change
in any "loop" , so it is inefficient to pass them all around.

	#. A program for carrying out a test by searching a space of
test cases may make a decision at some intermediate point and wish
to abort the search at this point.  Without the aid of special
control mechanisms (such as ⊗catch and ⊗throw to be discussed in
Chapter {section MACHIN}) the explicit loop structure of a sequential
program is needed to do this efficiently.

Remarks:
.item←0

	#. Although qreturn  and qgo statements in LISP have no
meaning unless they occur inside a qprog, an 
assignment statement may be regarded as a program term.
Thus ⊗⊗[x_←_qa y]_._z⊗ is an abbreviation for
⊗⊗qprog[[]_x_←_qa y;_qreturn x]_._z⊗.

	#. Besides $SETQ, most LISPs allow assignments of the
form  ($SET <exp1> <exp2>),
where <exp1> is evaluated to determine to what variable the assignment
is to be made.  If <exp1> doesn't evaluate to a variable, an error
is signalled.
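For example, the term

⊗⊗⊗$$(SET (COND (FLAG 'X) (T 'Y)) 0)$⊗

assigns 0 to the variable $X when the value of $FLAG is true and to $Y otherwise.
(This is only a sketch; $FLAG, $X and $Y are hypothetical variables.)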

	#.  LISPs generally have special forms of the $PROG construct
for frequently used special cases.  For example 

⊗⊗⊗$$(PROG1 <s1> <s2> ._._. <sn>)$⊗

evaluates the statements <s1> through <sn> in order and returns the value
of <s1>, and

⊗⊗⊗$$(PROGN <s1> <s2> ._._. <sn>)$⊗

evaluates the statements <s1> through <sn> in order and returns the value
of <sn>.  
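For example, a common use of $PROG1 is the idiom

⊗⊗⊗$$(SETQ X (PROG1 (CAR L) (SETQ L (CDR L))))$⊗

which removes the first element of the list $L and assigns it to $X: $PROG1 returns
the value of $$(CAR L)$ computed before $L was changed by the inner $SETQ.  (This is
only a sketch; $X and $L are hypothetical variables.)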

	#. In the conditional statement it is convenient to allow a
list of statements as the qthen clause rather than a single statement.
This often allows a more natural organization of the program
and causes no difficulty in the rewriting of clean sequential programs as
functional programs.  In fact, LISP systems usually allow this more general
form of conditional; thus the conditional has the general form
.skip
.begin center
($COND (<p1> <e11> ... <e1k>) ... (<pn> <en1> ... <enm>))
.end
The value returned, in case one is wanted, is the value of the last <eij>
evaluated.  This allows the simulation of a flowchart description of 
a sequential program without explicitly labeling the sequence of 
instructions that is to follow a particular branch.
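For example, in the body of a qprog one might write

.begin nofill select 6
.indent 15,15
.group

(COND ((NULL U) (SETQ W (REVERSE V)) (GO DONE))
      (T (SETQ V (CONS (CAR U) V)) (SETQ U (CDR U))))

.end

which either finishes up and transfers to the label $DONE, or performs both
assignments and falls through to the next statement.  (This is only a sketch;
$U, $V, $W and the label $DONE belong to an assumed surrounding $PROG.)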

	#. In writing qprogs in external form we will often use the convention
that each statement is written on a separate line.  This means that the ";"
separator is not needed and may be omitted in this case.

.end "seq"
.ss(array, Arrays in LISP)

	As mentioned in section qsect{subsection prog}, 
in addition to simple variables, which may be 
assigned values using the assignment statement, LISP admits arrays.  An array 
may in principle have any number of dimensions.  If ⊗foo is a 2-dimensional 
(say 3 x 4)
array then ⊗⊗foo[i,j]⊗ can be thought of as a complicated variable name.   
It evaluates to the jth element of the ith row of ⊗foo and
it can be assigned a value using the function ⊗store.  Thus
⊗⊗store[foo,<2,3>,$$APPLE$]⊗ assigns to row_2 column_3 of ⊗foo the value $$APPLE$.
Recall that in external form we allowed subscripted variables in assignment 
statements. Thus ⊗⊗store[foo,<i,j>,e]⊗ can also be written ⊗⊗foo[i,j]_←_e⊗.
(⊗⊗foo[i,j]⊗ appearing on the left of an assignment denotes the position in the
array while in other contexts it denotes the contents of that position.)  
It would not in principle be difficult to modify the $SETQ of LISP 
to accept subscripted 
variables, although there may be practical reasons for not doing so.

	Before using an array, you must declare it to LISP.   
$$(ARRAY FOO T 3 4)$ tells the 
LISP system that the atom $FOO is to denote a 3 x 4 array of S-expressions.
(The third item qT specifies the type of the array. There are other
possibilities for type such as $FIXNUM or $FLONUM in MACLISP.  The form of the
array declaration may also vary with implementation.)
The internal forms for the above expressions for accessing and storing into 
arrays are $(FOO_I_J)  and  $(STORE_(FOO_2_3)_'APPLE).  
Note that accessing an array element has the same form as a function call.
LISP knows that $FOO is an array because you declared it to be one using the
$ARRAY command.  This is analogous to telling LISP about a program name
using the $DEFUN command.

	In addition to the accessing and storing primitives for operating on
arrays, LISP usually provides a means of converting lists to arrays
and vice versa.  In MACLISP they are called $FILLARRAY and $LISTARRAY.  
The following conversation with MACLISP will declare a fixnum array
$TIC and fill it with alternating $$1$s and $$0$s.

.begin nofill turnon "∂"
.n←30;m←10

∂(m)user:∂(n)$$(ARRAY TIC FIXNUM 3 3)$
∂(m)LISP:∂(n)$$TIC$

∂(m)user:∂(n)$$(FILLARRAY 'TIC '(1 0 1 0 1 0 1 0 1))$
∂(m)LISP:∂(n)$$TIC $

∂(m)user:∂(n)$$(LISTARRAY 'TIC)$
∂(m)LISP:∂(n)$$(1 0 1 0 1 0 1 0 1)$

∂(m)user:∂(n)$$(TIC 1 0)$
∂(m)LISP:∂(n)$$0 $

∂(m)user:∂(n)$$(TIC 1 1)$
∂(m)LISP:∂(n)$$1 $
.end

Notice that (i) the value of $FILLARRAY is the name of the array and
not the array itself, and (ii) the indices start at 0.
Note also that general arrays (of type $$T$) can have arbitrary entries, including
array names and array pointers.

Arrays are practical for two reasons.  One is quick access to individual
elements; the other is that they are updated rather than copied, which saves
space.   One could compare the use of lists vs arrays for storing data
to the use of tapes vs disks.   Lists are somewhat more flexible than tape
in that you can splice and chop more readily, 
but the sequential access vs direct access analogy is a good one.  

In LISP arrays might be used to represent game boards in game playing
programs, or to represent a picture as a "pixel" array.  We will
see an example of the former in Chapter {section SEARCH}.  As to the
latter, the program ⊗picture_find scans a picture ⊗pic represented as an
array (say of colors) and returns a list of all locations
where a given picture specification ⊗pat is satisfied.  The program
⊗pmatch checks to see if the specification ⊗pat is satisfied at
⊗pic[i,j].  A specification is a list of triples ⊗⊗<rd cd pred>⊗.
It is satisfied at ⊗pic[i,j] if each of the triples is.  A triple
is satisfied if the location ⊗<i+rd,j+cd> is within the picture and
⊗⊗pred[pic[i+rd,j+cd]]⊗ holds.

.begin nofill select 2 turnon "∂"
.n←15
.group

∂(n)pmatch[pat, pic, i, j] ← 
∂(n+4)qprog [s, p, r, c, m, n]
∂(n+8)  m ← sub1 qad arraydims pic
∂(n+8)  n ← sub1 qadd arraydims pic
∂(n+8)  p ← pat
∂(n+4)$$LOOP:$
∂(n+8)  qif qn p qthen return qT
∂(n+8)  s ← qa p
∂(n+8)  r ← i + qa s
∂(n+8)  qif r < 0 ∨ m < r qthen return qNIL
∂(n+8)  c ← j + qad s
∂(n+8)  qif c < 0 ∨ n < c qthen return qNIL
∂(n+8)  qif apply[qadd s, <apply[pic, <r, c>]>] qthen [p ← qd p, qgo $$LOOP$]
∂(n+8)  return qNIL

!fcnpicture&  

∂(n)picture_find[pic, pat] ← 
∂(n+4)prog [i, j, m, n, locs]
∂(n+8)  m ← sub1 qad arraydims pic
∂(n+8)  n ← sub1 qadd arraydims pic
∂(n+8)  i ← 0
∂(n+8)  j ← 0
∂(n+8)  locs ← qNIL
∂(n+4)$$LOOP:$
∂(n+8)  qif pmatch[pat, pic, i, j] qthen locs ← [i . j] . locs
∂(n+8)  qif j < n qthen [j ← add1 j, qgo $$LOOP$]
∂(n+8)  qif i < m qthen [i ← add1 i, j ← 0, qgo $$LOOP$]
∂(n+8)  return locs

.end

For example if we define the array $PIX as follows

.begin nofill select 6
.indent 15,15
.group

(ARRAY PIX T 6 6)

(FILLARRAY 'PIX
	(W R W G W B)
	(R W G W B W)
	(W G W B W R)
	(G W B W R W)
	(W B W R W G)
	(B W R W G W) )

.end

and the specification $PAT as follows

.begin nofill select 6
.indent 15,15
.group

(SETQ S1 '(0 0 (FUNCTION (LAMBDA (X) (EQ X 'W))) ) )
(SETQ S2 '(0 1 (FUNCTION (LAMBDA (X) (EQ X 'R))) ) )
(SETQ S3 '(1 0 (FUNCTION (LAMBDA (X) (EQ X 'R))) ) )
(SETQ S4 '(-1 0 (FUNCTION (LAMBDA (X) (EQ X 'B))) ) )
(SETQ S5 '(0 -1 (FUNCTION (LAMBDA (X) (EQ X 'B))) ) )

(SETQ PAT (LIST S1 S2 S3 S4 S5))

.end

then the list of locations satisfying the specification is obtained by

.begin nofill select 6
.indent 15,15
.group

(PICTURE_FIND 'PIX PAT)

((4 . 2) (3 . 3) (2 . 4))

.end

	We can begin to characterize arrays in LISP by 
modifying  the characterization of assignment and contents functions on  
state vectors given in McCarthy [1962b].   Thus we have functions 
⊗⊗a[array,pos,exp]⊗ and ⊗⊗c[array,pos]⊗ where ⊗a returns the array
resulting from ⊗⊗store[array,pos,exp]⊗ (in LISP ⊗store returns the value of
⊗⊗exp⊗) and ⊗c returns the contents of ⊗array at ⊗pos (like writing 
⊗⊗array[pos]⊗).  The functions ⊗a and ⊗c satisfy  the following relations
.begin nofill turnon "∂"
.n←8;m←40

!eqnarray1 ∂(n)⊗⊗c[a[array,pos,exp],pos1] = qif pos=pos1 qthen exp qelse c[array,pos1]⊗

!eqnarray2 ∂(n)⊗⊗array = a[array,pos,c[array,pos]]⊗

!eqnarray3 ∂(n)⊗⊗a[a[array,pos0,exp0],pos1,exp1] = ∂(m)qif pos0=pos1 qthen a[array,pos0,exp1]⊗
              ∂(m)⊗⊗qelse a[a[array,pos1,exp1],pos0,exp0]⊗
.end

These equations characterize the abstract notion of array.  
We also need to extend
the notion of equality to arrays in the obvious way.  (Namely, two arrays
are equal if and only if they have the same set of allowed positions and
the contents of any allowed position is the same for both arrays.)
The equations do not
deal with the fact that the ⊗store function of LISP destroys the old array
in the process of creating the new version.  
They can be used to prove properties of programs that have array type arguments.
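
	As an aside, here is a minimal purely functional sketch of these equations
in MACLISP, in which an abstract array is modeled as a list of position-value
pairs.  The names $ASTORE and $ACONTENTS are ours (hypothetical), standing for
⊗a and ⊗c, and an unassigned position is taken to contain qNIL; the first equation
holds literally for this representation, while the other two hold only up to the
extensional array equality just described.

.begin nofill select 6
.indent 10,10
.group

; ASTORE plays the role of a[array,pos,exp]: it returns an array in which
; position POS contains EXP and every other position is unchanged.
(DEFUN ASTORE (ARRAY POS EXP) (CONS (CONS POS EXP) ARRAY))

; ACONTENTS plays the role of c[array,pos]: the most recently stored
; entry for POS is the one found.
(DEFUN ACONTENTS (ARRAY POS)
  (COND ((NULL ARRAY) NIL)
        ((EQUAL POS (CAAR ARRAY)) (CDAR ARRAY))
        (T (ACONTENTS (CDR ARRAY) POS))))

.end

Unlike the ⊗store of LISP, this sketch does not destroy the old array, which is
exactly the difference noted above.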


For additional array functions and precise specification of
features the reader should consult the
programmer's manual for the version of LISP being used.

.ss(macro, Defining macros in LISP)

	A macro is a form of shorthand or "syntactic sugar".  A macro
definition tells LISP that when it sees the defined form 
it should first "decode" it obtaining the intended expression and
then evaluate the result.  For example, we might wish to have a genuine
%3if-then-else%* conditional.  Thus we would write

⊗⊗⊗$$(IF (ATOM X) (LIST X) 'NOT-AN-ATOM-X)$⊗

and the expression evaluated would be

⊗⊗⊗$$(COND ((ATOM X) (LIST X)) (T 'NOT-AN-ATOM-X)).$⊗

The reason for using macros is generally to write programs that are more
compact and suggestive of their intent.  For example defining 

⊗⊗⊗$$(BODY <e>)$⊗

which returns the body of a λ-expression to be

⊗⊗⊗$$(CADDR <e>)$⊗

and other such mnemonic devices might make ⊗eval more readable.
You might ask why not just define the appropriate functions?   There
are a couple of reasons.  One is that compilers can be made to
expand simple macros before compiling the code, thus eliminating
one or more levels of function call for each instance.
Also, in the case of $IF a function definition (of the ordinary sort)
would not work, as only one of the latter two arguments to $IF 
should be evaluated, not both.   It is also the case that the 
treatment of macros reflects the intent of such definitions
more accurately than function definitions do.
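To see why an ordinary function will not do for $IF, consider a recursive
definition such as

⊗⊗⊗$$(DEFUN FACT (N) (IF (ZEROP N) 1 (TIMES N (FACT (SUB1 N)))))$⊗

(a hypothetical illustration).  If $IF were a function, both branches would be
evaluated before $IF was ever entered, so the recursive branch would be evaluated
on every call and the computation could never terminate, even when $N is 0.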

	The basic macro facility of LISP is very simple.  A definition
has the form 

⊗⊗⊗$$(DEFUN <macname> MACRO (<var>) <macbody>).$⊗

This has the effect of associating with $<macname> a $MACRO property
which is a λ-expression with a single variable $<var> and body
$<macbody>.  When LISP encounters a term whose first element is 
an atom with a $MACRO property, $<macbody> is evaluated with $<var> 
bound to the entire term.  This produces the expression whose value 
is desired, and that expression is evaluated to determine the value of the
term.  For example the $BODY macro is defined by

⊗⊗⊗$$(DEFUN BODY MACRO (L) (LIST 'CADDR (CADR L)) )$⊗

and the $IF macro by

⊗⊗⊗$$(DEFUN IF MACRO (L) (LIST 'COND (LIST (CADR L) (CADDR L)) (LIST T (CADDDR L))))$⊗

Then when LISP encounters 

⊗⊗⊗$$(BODY '(LAMBDA (X) X))$⊗

it will macro expand this to 

⊗⊗⊗$$(CADDR '(LAMBDA (X) X))$⊗

which then evaluates to $X.  When it encounters 

⊗⊗⊗$$((LAMBDA (X) (IF (ATOM X) (LIST X) 'NOT-AN-ATOM)) 'AB)$⊗

it will evaluate

⊗⊗⊗$$(COND ((ATOM X) (LIST X)) (T 'NOT-AN-ATOM))$⊗

in an environment with $X bound to $AB and return $(AB).

As a slightly more complex example consider extending ⊗cons to
an arbitrary number of arguments by right associating.  Thus
⊗rcons[x1,x2,x3] stands for ⊗cons[x1,cons[x2,x3]].  This can
be done by the following macro definition:

.begin nofill select 6 group
.indent 12,12

(DEFUN RCONS MACRO (L)
  (COND ((NULL (CDDR L)) (CADR L))
	(T (LIST 'CONS (CADR L) (CONS 'RCONS (CDDR L)))) ))

.end

Several of the built-in LISP functions that take arbitrary numbers
of arguments are in fact implemented as macros based on functions
taking 2 arguments.  $PLUS and $TIMES are two examples.
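For instance, a sketch in the style of $RCONS (using the hypothetical names
$NPLUS and $PLUS2 so as not to interfere with the built-in $PLUS) might be

.begin nofill select 6 group
.indent 12,12

; expand (NPLUS x1 x2 ... xn) into nested calls of the two-argument PLUS2
(DEFUN NPLUS MACRO (L)
  (COND ((NULL (CDDR L)) (CADR L))
        (T (LIST 'PLUS2 (CADR L) (CONS 'NPLUS (CDDR L)))) ))

.end

so that $$(NPLUS 1 2 3)$ expands first to $$(PLUS2 1 (NPLUS 2 3))$ and finally
to $$(PLUS2 1 (PLUS2 2 3))$.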

	Although the macro facility is very powerful (we have not even 
scratched the surface with these examples), it is rather clumsy to use
for the simple kinds of macros we have been discussing.  A simple but useful
form of macro definition is the following:

⊗⊗⊗$$(MACDEF <macform> <macbody>).$⊗

Where $<macform> is a list with the macro name as the first element
and parameters following.  When a macro defined in this way is
encountered, the actual arguments are substituted for the parameters
in $<macbody> and the resulting expression is evaluated for the value.
The $BODY and $IF macros can be defined this way as follows:

⊗⊗⊗$$(MACDEF (IF A B C) (COND (A B) (T C)))$⊗

⊗⊗⊗$$(MACDEF (BODY E) (CADDR E))$⊗

Much easier!  The $MACDEF feature can be implemented as a macro-macro
which converts the higher level definition into a low level definition
(now we are beginning to see some of the power) as follows:

.begin nofill select 6 group
.turnoff "%"
.indent 10,10

(DEFUN MACDEF MACRO (L)
  (LIST 'DEFUN 
	(CAADR L)
	'MACRO 
	'(%L) 
	(LIST 'SUBLIS 
	      (LIST 'PRUP (LIST 'QUOTE (CDADR L)) (LIST 'CDR '%L) )
	      (LIST 'QUOTE (CADDR L))) ))

.turnon
.end

where $PRUP is the program for converting a pair of lists into an a-list
given in Chapter {subsection evaluator!}.
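
For example, the $MACDEF definition of $IF given above macro expands into
essentially the low level definition

.begin nofill select 6 group
.turnoff "%"
.indent 10,10

(DEFUN IF MACRO (%L)
  (SUBLIS (PRUP '(A B C) (CDR %L))
          '(COND (A B) (T C)) ))

.turnon
.end

which substitutes the actual argument expressions for $A, $B and $C in the
body $$(COND (A B) (T C))$ and then evaluates the result.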

	A somewhat more sophisticated macro-macro has the form

⊗⊗⊗$$(PMACDEF <macpattern> <macbody>)$⊗

Where $<macpattern> has the form $$(<macname> <p1> ._._. <pn>)$
and the $<pi> are elementary patterns: atoms whose pname
begins with $?, which match an arbitrary atom; atoms whose
pname begins with $*, which match list segments; or any other atoms,
which match only themselves.  When a term having a macro definition
of this form is encountered, the argument list is matched against the pattern
sequence.   Elements matching $? variables are substituted for the
corresponding variable in $<macbody>, while list segments matching
$* variables are spliced into the lists where the corresponding variable
occurs in $<macbody>.  For example

⊗⊗⊗$$(PMACDEF (LET *vars BE *vals IN ?body) ((LAMBDA (*vars) ?body) *vals))$⊗

implements the usual $LET construct for variable scoping.
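For instance, assuming the pattern matcher behaves as just described, a term such as

⊗⊗⊗$$(LET X Y BE (CAR U) (CDR U) IN (CONS Y X))$⊗

would expand into

⊗⊗⊗$$((LAMBDA (X Y) (CONS Y X)) (CAR U) (CDR U))$⊗

with the segments matched by $*vars and $*vals spliced in and the expression
matched by $?body substituted.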

The $PMACDEF facility can be implemented as a macro-macro written
in terms of a suitable pattern matcher and pattern instantiator.
Clearly the more sophisticated the pattern matcher is, the more
sophisticated this macro definition form will be.  We will have
more to say about this in the chapter on pattern matching.

[Remark: $MACDEF and $PMACDEF are simplified versions of macro-macros
available in MACLISP.]

As we see, macros provide the programmer with the ability to implement
both data structures and high level program constructs.  
One possible point of view in language design is to provide a very
simple basic language (such as pure LISP) and a powerful macro
definition facility in terms of which higher level languages can
be implemented.  With a good compiler this need not reduce the efficiency
of compiled programs, and it provides the programmer with great flexibility.
The RABBIT compiler for SCHEME [Steele 1978] implements this point of
view.

.cb Exercises
.item ← 0

#.  What is the value of $$(RCONS 'A 'B 'C)$?

#.  Write a definition for the macro $LCONS that applies $CONS to
an arbitrary number of arguments using left association rather than
right association.  Thus ⊗lcons[x1,x2,x3]=cons[cons[x1,x2],x3].  

.ss(rplac,Pseudo-functions that modify list structures.)


	In pure LISP, list structure is never changed.  ⊗car and ⊗cdr merely
move about in a list structure, and ⊗cons creates new list structure
from the free storage list.  Of course, a variable can get a new value
that corresponds, for instance, to putting an element on the end of
a list, but in pure LISP this can only be accomplished by creating
new list structure.  In this section we will discuss operations that
actually modify list structure.  They often have efficiency advantages,
but their use interferes with saving memory by merging list
structure, and techniques for proving correctness of such
programs are quite undeveloped.

	The simplest way to express operations that modify list
structure is in the form of assignments to %2car-cdr%1
chains of variables, e.g. we may write 
⊗⊗qada x_←_y_._z⊗.
The effect of executing this statement is to replace the qada  part
of the list structure pointed to by ⊗x with the value of the right
hand side.  As a term, the statement takes the updated value of the
first argument. In internal notation, we use the pseudo-functions 
$$(RPLACA_X_Y)$ and $$(RPLACD_X_Y)$ which correspond to the statements
⊗⊗qa x_←_y⊗ and ⊗⊗qd x_←_y⊗ respectively.

	$RPLACA and $RPLACD are highly unclean.  The effect of
evaluating an expression involving $RPLACA or $RPLACD 
depends on the state of list structure and not merely on the S-expressions
that the variables have as values.  Consider the programs ⊗test1 and
⊗test2 given by

.begin nofill select 2 turnon "∂"
.n←20
.group

∂(n)test1[] ← 
∂(n+4)  qprog[[]
∂(n+8)      x ← $$(A B)$;
∂(n+8)      y ← $$(A B)$;
∂(n+8)      z ← [qad x ← $$C$] ]

∂(n)test2[] ← 
∂(n+4)  qprog[[]
∂(n+8)      x ← $$(A B)$;
∂(n+8)      y ← x;
∂(n+8)      z ← [qad x ← $$C$]]

.end

After executing ⊗test1 we have ⊗⊗x=$$(A_C)$⊗, ⊗⊗y=$$(A_B)$⊗, and ⊗⊗z=$$(C)$⊗,
while
after executing ⊗test2 we have ⊗⊗x=$$(A_C)$⊗, ⊗⊗y=$$(A_C)$⊗, and ⊗⊗z=$$(C)$⊗.
This is because in the first case ⊗x and ⊗y are different list-structures,
so modifying ⊗x doesn't change ⊗y. In the second case ⊗x and ⊗y are the same
list-structure (i.e. ⊗⊗x_qeq_y⊗), so any modification of ⊗x also affects ⊗y.  
In fact if ⊗x and ⊗y are not the same, but share some substructure, 
any modification of that structure will affect both ⊗x and ⊗y.  
The uncleanliness of $RPLACA and $RPLACD means that the programmer
must know exactly what list structures merge.  

Here are the internal forms of the test programs in case you are unsure
of the notations:
.begin nofill select 6
.indent 15,15
.group

(DEFUN TEST1 ()
  (PROG ()
    (SETQ X '(A B))
    (SETQ Y '(A B))
    (SETQ Z (RPLACA (CDR X) 'C))
  ))

(DEFUN TEST2 ()
  (PROG ()
    (SETQ X '(A B))
    (SETQ Y X)
    (SETQ Z (RPLACA (CDR X) 'C))
  ))

.end

A typical application is the pseudo-function %2nconc%1 defined by
.begin nofill select 2 turnon "∂"
.n←20
.group


∂(n)nconc[u,v] ← 
∂(n+4)  qprog[[w]
∂(n+8)	    qif qn u qthen qreturn v;
∂(n+8)      w ← u;
!fcnnconc&1 ∂(n+4)loop:
∂(n+8)      qif qn qd w qthen [qd w ← v; qreturn u];
∂(n+8)	    w ← qd w;
∂(n+8)	    qgo loop].

.end
⊗nconc can also be given a definition that has the form of a recursive
program, namely
.begin nofill select 2 turnon "∂"
.n←20
.group

∂(n)nconc[u, v] ← qif qn u qthen v qelse nconc1[u, u, v]
!fcnnconc&2  
∂(n)nconc1[u, w, v] ← 
∂(n+4)qif qn qd w qthen [λside: u][rplacd[w, v]] 
∂(n+4)qelse nconc1[u, qd w, v].

.end

	The value of ⊗⊗nconc[u,_v]⊗ is just ⊗u_*_v, but it achieves
this result by modifying the ⊗cdr of the last cell of ⊗u to point to ⊗v.  It
is typically used when no other variable points to a list structure
that merges into ⊗u, and the old value of ⊗u will not be used
again.  In that case, ⊗nconc is advantageous, because it does no
⊗⊗cons⊗es, and thus can't initiate garbage collection.  No formal
methods exist at present for proving that these conditions are met.
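
	The following sketch (in the style of ⊗test1 and ⊗test2 above, and with
hypothetical variable names) contrasts ⊗append, which copies its first argument,
with ⊗nconc, which modifies it:

.begin nofill select 6
.indent 15,15
.group

(DEFUN NCONCTEST ()
  (PROG ()
    (SETQ X '(A B))
    (SETQ Y (APPEND X '(C)))   ; X is untouched: APPEND copies X
    (SETQ Z (NCONC X '(C)))    ; X itself now ends in C
  ))

.end

After calling $NCONCTEST, $Y is a freshly constructed list $$(A_B_C)$, while $X
has been changed in place to $$(A_B_C)$ and $Z is the very same structure as $X
(they are qeq).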

	$RPLACD can also be used to insert an element into the
middle of a list.  Suppose, for example, that we wish to insert
the atom $B after every occurrence of the atom $A in a list ⊗u.  We
can define a pure LISP function that gives a list obtained from
the list ⊗u by inserting such $$B$'s as follows:

!fcninsertb&1 ⊗⊗⊗    insertb u ← qif qn u qthen qnil qelse qif qa u = $A qthen
$A . [$B . insertb qd u] qelse qa u . insertb qd u.⊗

Computing ⊗⊗insertb_u⊗ does not change the value of ⊗u, because
new list structure is manufactured.  On the other hand, if we define

.begin nofill select 2 turnon "∂"
.n←20
.group


∂(n)insertb u ← 
∂(n+4)qprog[[w];
∂(n+8)w ← u;
!fcninsertb&2 ∂(n+4)loop:
∂(n+8)qif qn w qthen qreturn u;
∂(n+8)qif qa w = $A qthen [qd w ← $B . qd w; w ← qd w];
∂(n+8)w ← qd w;
∂(n+8)qgo loop],

.end
we get a program that does the actual insertions.  It is more efficient
because it uses fewer ⊗⊗cons⊗es, but it will damage any list
structure that merges with the structure representing ⊗u, and it is
mathematically recalcitrant.

	Finally we have two examples of programs destructively
removing items from a list.  The program ⊗remv deletes all occurrences
of ⊗x from the list ⊗u.  To remove something from a list destructively,
the ⊗cdr of the previous element is replaced (using ⊗rplacd) by a pointer to the
next element.  If there is no previous element, the list is replaced by
its ⊗cdr.  Hence the two loops of ⊗remv.  

.begin nofill select 2 turnon "∂"
.n←20
.group

∂(n)remv[x,u] ← 
∂(n+4)qprog[[v]
∂(n+4)$$LEADERS$:
∂(n+8)qif qn u qthen qreturn u;
∂(n+8)qif qa u = x qthen [u ← qd u, qgo $$LEADERS$];
∂(n+8)v ← u;
∂(n+4)$$INTERIORS:$
!fcnremv& ∂(n+8)qif qn qd v qthen qreturn u;
∂(n+8)qif [qad v = x] qthen [qd v ← qdd v; qgo  $$INTERIORS$];
∂(n+8)v ← qd v;
∂(n+8)qgo $$INTERIORS$]
.end

The program ⊗prune deletes all elements of the list ⊗u that
are members of the list ⊗seen.  Instead of using two loops, we
tack a dummy onto ⊗u so there will always be a previous element.
.begin nofill select 2 turnon "∂"
.n←20
.group


∂(n)prune[u,seen] ← 
∂(n+4)qprog[[v]
∂(n+8)u ← qNIL . u;
∂(n+8)v ← u;
∂(n+4)ploop:
!fcnprune& ∂(n+8)qif qn qd v qthen qreturn qd u;
∂(n+8)qif [qad v ε seen] qthen [qd v ← qdd v; qgo  ploop];
∂(n+8)v ← qd v;
∂(n+8)qgo ploop]
.end

In order to make the removals go uniformly we begin by tacking a dummy element
qNIL onto ⊗u.  Thus the list is non-empty and the next element of interest is the
"second" element of the list currently pointed to.  
An example of a program using destructive operations in a controlled 
and "clean" manner is the ⊗editor given in Chapter {section MACHIN}.
Here the program to be edited is initially copied, then destructively
edited to save repeated and unnecessary reconstruction.

.cb Exercise
.item←0

#. Consider the following test programs:
.begin nofill select 2 turnon "∂"
.n←20
.group

∂(n)test3[] ← 
∂(n+4)  qprog[[]
∂(n+8)      x ← $$(A B)$;
∂(n+8)      y ← x;
∂(n+8)      qreturn x qequal [λside.x][qa y ← $$C$]]

∂(n)test4[] ← 
∂(n+4)  qprog[[]
∂(n+8)      x ← $$(A B)$;
∂(n+8)      y ← x;
∂(n+8)      qreturn copy x qequal [λside.x][qa y ← $$C$]]

∂(n)copy x ← qif qat x qthen x qelse [copy car x . copy cdr x]

.end
We have ⊗⊗copy_x_qequal_x_=qT⊗ and ⊗⊗test3[]=qT⊗ but ⊗⊗test4[]=qNIL⊗.
Explain this.

.ss(reent,Re-entrant List Structure.)

	So far we have only considered list structure in which all
⊗⊗car-cdr⊗ chains terminate in atoms.  Mathematically, infinite
list structures are also possible.  They can't be represented in
a computer, but since they can satisfy the LISP algebraic axioms, we
have had to exclude them in our axiomatization of LISP
by axiom schemata of induction.
Another possibility is re-entrant list structure.  Such structures can
be created by the $RPLACA and $RPLACD operations.  
Figure {fig imp1} shows some examples of re-entrant list structures.
For example, if the variable ⊗x has value $(A), executing the statement
⊗⊗qa x ← x⊗ 
gives rise to a circular structure (i) while if we
execute ⊗⊗qd x ← x⊗ we get the structure (ii).  If we execute ⊗⊗qd x ← x⊗ with
⊗x as in (i) or ⊗⊗qa x ← x⊗ with ⊗x as in (ii) 
we get the doubly re-entrant structure (iii).   
Notice that in (i) we have ⊗⊗x_qeq_qa x⊗, in (ii) ⊗⊗x_qeq_qd x⊗ and in (iii)
we have ⊗⊗x_qeq_qa x⊗ and ⊗⊗x_qeq_qd x⊗.   These equations contradict the
induction schemata for lists and S-expressions.  Thus list structures
with destructive operations are not a model of the theory of lists and 
S-expressions given in Chapter {section PROVIN}.

	We can simulate recursive programs by replacing the program name in calls to the
program by a pointer to the lambda expression defining the program.
For example if we execute ⊗mkleft then the resulting value of ⊗left 
will have the structure shown in Figure {fig imp2}.

.begin nofill select 2 turnon "∂"
.n←15
.group


∂(n)mkleft[] ← 
∂(n+4)qprog[[]
∂(n+8)left ← $$(LAMBDA (X) (COND ((ATOM X) X) (T (LEFT (CAR X))) ))$;
∂(n+8)qa qad qadd qadd left ← left;
∂(n+8)qreturn $$LEFT$]

.end

Also ⊗⊗apply[left,_<x>]_=_left[x]⊗ where

!fcnleft&  ⊗⊗⊗left x ← qif qat x qthen x qelse left qa x⊗.

The reason for encasing the construction in a qprog, even though the
assignments were made to non-prog variables, is that most LISPs
(including MACLISP) do not check for re-entrant 
structures, and an attempt to print a re-entrant list will cause an
infinite loop; returning the atom $$LEFT$ rather than the structure itself
keeps the top level read-eval-print loop from trying to print it.  

.begin "imp1"
.group
.begin 
.verbatim; select C; preface 0; millpreface ← 0;

	⊂αααπααα⊃                   ⊂αααπααα⊃                       ⊂αααπααα⊃
  ααααα→~   ~   ~             ααααα→~   ~   ~              αααααααα→~   ~   ~
    ↑   %απα∀απα$               ↑   %απα∀απα$                ↑  ↑   %απα∀απα$
    ~     ~   ↓                 ~     ↓   ~                  ~  ~     ~   ~
    %ααααα$  NIL                ~     A   ~                  ~  %ααααα$   ~
				%ααααααααα$                  %αααααααααααα$


    (RPLACA X X)                (RPLACD X X)             (RPLACA (RPLACD X X) X)

         (i)                        (ii)                          (iii)

.end 
!figimp1 Examples of re-entrant list structures.!
.apart
.end "imp1"

.begin "imp2"
.group
.begin 
.verbatim; select C; preface 0; millpreface ← 0;

    ⊂αααπααα⊃   ⊂αααπααα⊃   ⊂αααπααα⊃
ααα→~   ~   εαα→~   ~   εαα→~   ~   εαα→NIL
 ↑  %απα∀ααα$   %απα∀ααα$   %απα∀ααα$
 ~    ↓           ↓           ~   
 ~ LAMBDA       ⊂αααπααα⊃     ~  
 ~              ~   ~   ~     ~ 
 ~              %απα∀απα$     ~ 
 ~	          ↓   ↓       ~ 
 ~	          X  NIL      ~ 
 ~                            ~
 ~	     ⊂αααααααααααααααα$ 
 ~           ↓
 ~         ⊂αααπααα⊃   ⊂αααπααα⊃   ⊂αααπααα⊃
 ~	   ~   ~   εαα→~   ~   εαα→~   ~   εαα→NIL
 ~	   %απα∀ααα$   %απα∀ααα$   %απα∀ααα$
 ~	     ↓           ~           ↓
 ~	   COND          ~         ⊂αααπααα⊃   ⊂αααπααα⊃
 ~	                 ~         ~   ~   εαα→~   ~   εαα→NIL
 ~	                 ~         %απα∀ααα$   %απα∀ααα$
 ~	                 ~           ↓           ↓
 ~	                 ~           T         ⊂αααπααα⊃   ⊂αααπααα⊃
 ~	                 ↓                     ~   ~   εαα→~   ~   εαα→NIL
 ~	               ⊂αααπααα⊃   ⊂αααπααα⊃   %απα∀ααα$   %απα∀ααα$
 ~	               ~   ~   εαα→~   ~   ~     ~           ↓
 ~	               %απα∀ααα$   %απα∀απα$     ~         ⊂αααπααα⊃   ⊂αααπααα⊃
 ~	                 ~           ↓   ↓       ~         ~   ~   εαα→~   ~   ~
 ~	                 ~           X  NIL      ~         %απα∀ααα$   %απα∀απα$
 ~	                 ↓                       ~           ↓           ↓   ↓ 
 ~	               ⊂αααπααα⊃   ⊂αααπααα⊃     ~          CAR          X  NIL
 ~	               ~   ~   εαα→~   ~   ~     ~
 ~	               %απα∀ααα$   %απα∀απα$     ~
 ~	                 ↓           ↓   ↓       ~
 ~	                ATOM         X  NIL      ~
 ~	                                         ~
 ~	                                         ~
 %ααααααααααααααααααααααααααααααααααααααααααααααα$ 
.end 
.begin nofill 

⊗⊗⊗$$(SETQ LEFT '(LAMBDA (X) (COND ((ATOM X) X) (T (LEFT (CAR X))) )))$⊗
⊗⊗⊗$$(RPLACA (CADR (CADDR (CADDR LEFT))) LEFT)$⊗

.end

!figimp2 Another re-entrant list structure.!
.apart
.end "imp2"

	As we have just noted, re-entrant list structures do not satisfy the 
induction axioms given for S-expressions, and
the correctness of most of the programs given in this book
depends on the data being non-re-entrant.  Re-entrant structures can
be represented by S-expressions provided certain atoms are reserved
for use as labels and a suitable convention is adopted for distinguishing
ordinary atoms from labels and pointers to the labels.  For example,
the structure of figure {fig imp1} (iii) might be represented 
by $$((LABEL_A).((POINT_A).(POINT_A)))$.

	Programs that compute with re-entrant structures need to keep lists of
vertices they have visited so that they can tell when they are about
to re-enter.
Thus the equivalence of possibly re-entrant or merging list structures can be tested by the
predicate ⊗equiv defined by:

.begin nofill select 2 turnon "∂"
.group
.n←8


∂(n+8)        equiv[x, y] ← ¬[equiv1[x, y, qNIL] qeq $$LOSE$] 

∂(n+8)        equiv1[x, y, u] ←  
∂(n+12)           qif u qeq $$LOSE$ qthen $$LOSE$ 
∂(n+12)           qelse qif x qeq y ∨ match[x, y, u] qthen u 
!fcnequiv& ∂(n+12)           qelse qif qat x ∨ qat y ∨ unmatch[x, y, u] qthen $$LOSE$ 
∂(n+12)           qelse equiv1[qa x, qa y, equiv1[qd x, qd y, [x . y] . u]] 

∂(n+8)        match[x, y, u] ← ¬qn u ∧ [[x qeq qaa u ∧ y qeq qda u] ∨ match[x, y, qd u]] 

∂(n+8)        unmatch[x, y, u] ← ¬qn u ∧ [x qeq qaa u ∨ y qeq qda u ∨ unmatch[x, y, qd u]] 
.end

The argument ⊗u in ⊗equiv1 is a list of position pairs seen so far in the parallel
traversal of ⊗x and ⊗y.  It is necessary to carry around information of this sort
in order to avoid repeating some part of the computation forever.
Note also the use of qeq.  The test we want to make is "have we been here before?",
not "have we seen this S-expression before?"; the program ⊗equal, which tests
the latter, would loop on re-entrant structures.

.skip
.cb Exercise
.item←0

#.  Consider the following definition of the Fibonacci function.  
.begin nofill turnon "∂" group
.n←8

∂(n)⊗⊗        fibon n ← qif n = $0 ∨ n = $1 qthen $1 qelse fibloop[n, $$(1 1)$]⊗
!fcnfibon&  
∂(n)⊗⊗        fibloop[n, l] ← ⊗
∂(n)⊗⊗            qif qn qdd l qthen ⊗
∂(n)⊗⊗                [qif n = $2 qthen qad rplacd[qd l, <qa l + qad l>]⊗
∂(n)⊗⊗                 qelse fibloop[sub1 n, rplacd[qd l, <qa l + qad l>]]]⊗
∂(n)⊗⊗            qelse qif n = $2 qthen qadd l qelse fibloop[n-$1, qd l]⊗
.end

After evaluating ⊗⊗fibon 5⊗, if we examine the definition on
the property list of ⊗fibon, we find that it corresponds to the definition

⊗⊗⊗fibon n ← qif n = $0 ∨ n = $1 qthen $1 qelse fibloop[n, $$(1 1 2 3 5 8)$]⊗ .

Thus ⊗fibon seems to be getting smarter.  What has happened?
.ss(impureex,Exercises.)
.item←0

.cb Sequential programs


#.  Extend ⊗eval ({eqn evall!!}) to handle $PROG, $SETQ, $GO, and $RETURN.  
Is it necessary to change the evaluation of $$COND$s?   There are several
possible ways of handling labels.  Try to think of at least two and consider
the advantages and disadvantages of each.

.cb Property lists and special properties.

#.  The association-list argument of ⊗eval {eqn evall!!} provides a way of
associating a value property with any atom.  This simple version
of ⊗eval does not distinguish between values that are intended to
be applied as functions and simple values.  Replace the a-list
argument by a more general structure capable of associating arbitrary
properties with an atom.  Write appropriate accessing and modifying
functions (⊗get, ⊗putprop, and ⊗⊗remprop⊗) and modify ⊗eval where
necessary, including a clause for $DEFUN.  

#.  How might you extend the more sophisticated ⊗eval above to handle 
$PROG and related constructs?  Are there now new ways of treating 
statement labels?


.cb Macro definitions

#.  The macro definition feature of LISP described earlier is somewhat
clumsy to use in general.  Write your own macro definition maker that
takes a macro name, a parameter specification and a definition body 
and produces an appropriate definition.  The parameter specification
could be a pattern containing variables that are to match various
kinds of expressions, such as atoms, segments of lists, arbitrary expressions,
or expressions satisfying some predicate.  The pattern would also contain
constants, etc.  The macro definition produced 
would then generate code by matching the call to the pattern to obtain
values of the variables and then filling them in where they occur in the body.
For example given a definition such as
.begin nofill select 6 indent 12,12

(MACRO-MAKER FOR (?I IN ??LIST DO *STMTS)
    (PROG (?I L) 
	(SETQ L ??LIST)
       LOOP 
	(COND ((NULL L) (RETURN NIL))) 
	(SETQ ?I (CAR L))
	(SETQ L (CDR L))
	*STMTS
	(GO LOOP)) )
.end

A call to this macro such as 
.begin nofill select 6 indent 12,12

(FOR X IN '(A B C) DO 
  (COND ((GET X 'FN) (APPLY (GET X 'FN) NIL))) 
  (PRINT X))
.end
would result in the following code to be evaluated 
.begin nofill select 6 indent 12,12

(PROG (X L) 
    (SETQ L '(A B C))
   LOOP 
    (COND ((NULL L) (RETURN NIL))) 
    (SETQ X (CAR L))
    (SETQ L (CDR L))
    (COND ((GET X 'FN) (APPLY (GET X 'FN) NIL))) 
    (PRINT X)
    (GO LOOP)) 
.end
The task then is to write the program ⊗macro-maker which will put a suitable
macro definition on the property list of the indicated atom.

.cb Structure modifying functions

#. Write a function that reverses a list by reversing the pointers and thus not 
copying the list.   How could you prove your function correct?  (The latter is
a non-trivial task to carry out formally.)

.cb Re-entrant list structures.

#. Write a program to list the atoms in a possibly re-entrant
list structure.

#. Write a program to translate a graph description from
the notation of Chapter I to a re-entrant list structure.
Write a program to translate the other way.

#. Write a function ⊗mk-label which takes a name (atom) and an expression
as arguments and replaces all occurrences of the name by a pointer to the
expression itself.  When applied to a lambda-expression this creates
a self-referential expression which, when applied to an appropriate list
of arguments, returns the same value as if ⊗label[name,expression] had
been applied to the same arguments, thus simulating the ⊗label construct.

#. Write functions to translate in each direction between
possibly re-entrant structures and the S-expressions corresponding to
them using the $LABEL, $POINT notation described above.

#. Write an axiom schema of induction for finite possibly
re-entrant list structures.  Hint: The schema can be based on an
assertion of convergence of a program that scans a list structure keeping
track of the vertices it has already visited.